SNR: <u>S</u>queezing <u>N</u>umerical <u>R</u>ange Defuses Bit Error Vulnerability Surface in Deep Neural Networks

Authors

Abstract

As deep learning algorithms are widely adopted, an increasing number of them are positioned in embedded application domains with strict reliability constraints. The expenditure of significant resources to satisfy the performance requirements of neural network accelerators has thinned out the margins for delivering safety in such applications, thus precluding the adoption of conventional fault tolerance methods. The potential of exploiting the inherent resilience characteristics of neural networks remains unexplored though, offering a promising low-cost path towards safe embedded applications. This work demonstrates the possibility of such exploitation by juxtaposing the reduction of the vulnerability surface through the proper design of quantization schemes with the shaping of parameter distributions at each layer under the guidance offered by appropriate training methods, delivering high resilience through merely algorithmic modifications. Unequaled resilience results: errors can be injected into safety-critical applications, which tolerate high bit error rates at absolutely zero hardware, energy, and performance costs while improving error-free model accuracy even further.
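To make the abstract's central idea concrete, the sketch below (Python, with hypothetical `quantize` and `worst_bit_flip_error` helpers; not the paper's actual scheme) illustrates why squeezing the numerical range shrinks the bit error vulnerability surface: with a symmetric fixed-point quantizer, the real-valued weight of every stored bit scales with the representable range, so a tighter range directly bounds the worst-case perturbation a single bit flip can cause.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact scheme): symmetric uniform
# quantization of a weight tensor to `bits` bits over [-clip, clip].
def quantize(weights, bits=8, clip=1.0):
    qmax = 2 ** (bits - 1) - 1
    scale = clip / qmax                       # real value of one LSB
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q.astype(np.int32), scale

def worst_bit_flip_error(bits, scale):
    # Flipping the most significant magnitude bit perturbs the stored integer
    # by 2**(bits - 2); its real-valued impact scales with `scale`, i.e. with
    # the squeezed numerical range.
    return (2 ** (bits - 2)) * scale

weights = np.random.randn(1000) * 0.1
for clip in (4.0, 1.0, 0.25):                 # progressively tighter ranges
    _, scale = quantize(weights, bits=8, clip=clip)
    print(f"clip={clip:<4}: worst single-bit-flip error = {worst_bit_flip_error(8, scale):.4f}")
```

Running this shows the worst-case single-bit-flip error shrinking in proportion to the clipping range, which is the intuition behind pairing range-squeezing quantization with training methods that keep the parameter distributions inside that tighter range.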


Similar Articles

BitNet: Bit-Regularized Deep Neural Networks

We present a novel regularization scheme for training deep neural networks. The parameters of neural networks are usually unconstrained and have a dynamic range dispersed over the real line. Our key idea is to control the expressive power of the network by dynamically quantizing the range and set of values that the parameters can take. We formulate this idea using a novel end-to-end approach th...
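As a hedged illustration of the general idea described above (controlling the range and set of values the parameters can take), the following Python sketch projects weights onto a low-bit uniform grid over a constrained interval and measures how far the full-precision weights sit from that grid; a penalty of this general kind could serve as a quantization-oriented regularizer. The function names and the squared-distance penalty are assumptions for illustration, not BitNet's actual formulation.

```python
import numpy as np

def quantize_to_grid(w, bits=4, lo=-1.0, hi=1.0):
    """Map each value to the nearest point of a uniform grid over [lo, hi]."""
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    return lo + np.round((np.clip(w, lo, hi) - lo) / step) * step

def quantization_penalty(w, bits=4, lo=-1.0, hi=1.0):
    # Hypothetical regularizer: distance between the full-precision parameters
    # and their low-bit projection onto the constrained range.
    return float(np.mean((w - quantize_to_grid(w, bits, lo, hi)) ** 2))

w = np.random.randn(1000) * 0.5
print(quantization_penalty(w, bits=4))
```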


ANGE - Automatic Neural Generator

Artificial Neural Networks find application across a wide range of research areas, but they have never really lived up to the promise they seemed to hold at the beginning of the 1980s. One of the reasons for this is the lack of hardware for implementing them in a straightforward and simple way. This paper presents a tool to respond to this need: an Automatic Neural Generator. The generator allows a u...


Bit Error Rate is Convex at High SNR

Motivated by the widespread use of convex optimization techniques, the convexity properties of the bit error rate of the maximum likelihood detector operating in the AWGN channel are studied for arbitrary constellations and bit mappings, which may also include coding under maximum-likelihood decoding. Under this generic setting, the pairwise probability of error and the bit error rate are shown to be convex...
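A minimal worked instance of this convexity claim, restricted to BPSK over AWGN (a special case, not the paper's general result for arbitrary constellations):

```latex
For BPSK over AWGN at SNR $\gamma$, the bit error rate is
\[
  P_b(\gamma) = Q\!\left(\sqrt{2\gamma}\right),
  \qquad Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\, dt .
\]
Differentiating twice with respect to $\gamma$ gives
\[
  P_b'(\gamma)  = -\frac{1}{\sqrt{2\pi}}\, e^{-\gamma}\,(2\gamma)^{-1/2},
  \qquad
  P_b''(\gamma) = \frac{1}{\sqrt{2\pi}}\, e^{-\gamma}
                  \left[(2\gamma)^{-1/2} + (2\gamma)^{-3/2}\right] > 0
  \quad \text{for all } \gamma > 0,
\]
so for BPSK the bit error rate is convex in the SNR at every SNR, a special
case consistent with the general high-SNR statement above.
```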


Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization

Low-bit deep neural networks (DNNs) are becoming critical for embedded applications due to their low storage requirements and computing efficiency. However, they suffer considerably from a non-negligible accuracy drop. This paper proposes the stochastic quantization (SQ) algorithm for learning accurate low-bit DNNs. The motivation stems from the following observation. Existing training algorithms approximate...
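Since the snippet above is cut off before describing the mechanism, the Python sketch below shows only the general flavor of stochastic quantization under an assumed simplification: in each step, a randomly selected fraction of the weights is quantized (here to a 1-bit sign code) while the remainder stays at full precision, with the fraction intended to grow toward 1 over training. The selection rule and schedule here are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sign_binarize(w):
    """1-bit code: sign of each weight scaled by the mean absolute value."""
    return np.sign(w) * np.mean(np.abs(w))

def stochastic_quantize(w, ratio=0.5, seed=0):
    # Assumed simplification: quantize a random `ratio` of the weights and
    # keep the rest at full precision; `ratio` would be annealed toward 1.0.
    rng = np.random.default_rng(seed)
    mask = rng.random(w.shape) < ratio
    return np.where(mask, sign_binarize(w), w)

w = np.random.randn(8)
print(stochastic_quantize(w, ratio=0.5))
```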


Bit-Serial Neural Networks

A bit-serial VLSI neural network is described, from an initial architecture for a synapse array through to silicon layout and board design. The issues surrounding bit-serial computation and analog/digital arithmetic are discussed, and the parallel development of a hybrid analog/digital neural network is outlined. Learning and recall capabilities are reported for the bit-serial network along ...
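As a small aside on what bit-serial computation means in this context, the sketch below (a hypothetical illustration, unrelated to the specific VLSI design described) computes a multiply in the bit-serial style: the weight is consumed one bit per cycle and the product accumulates through shift-and-add, which is the arithmetic such hardware serializes.

```python
def bit_serial_multiply(x: int, w: int, bits: int = 8) -> int:
    """Multiply non-negative integers by streaming `w` one bit per cycle."""
    acc = 0
    for cycle in range(bits):        # one bit of the weight per cycle
        if (w >> cycle) & 1:         # current serial bit of the weight
            acc += x << cycle        # add the shifted partial product
    return acc

assert bit_serial_multiply(13, 11) == 13 * 11
```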



Journal

Journal title: ACM Transactions on Embedded Computing Systems

Year: 2021

ISSN: 1539-9087, 1558-3465

DOI: https://doi.org/10.1145/3477007